
    Watch Your Step! Terrain Traversability for Robot Control

    Watch your step! Or perhaps, watch your wheels. Whatever the robot, if it puts its feet, tracks, or wheels in the wrong place, it may be damaged; and as robots move rapidly from structured, fully known environments towards uncertain and unknown terrain, surface assessment becomes an essential requirement. Future mobile robots therefore cannot neglect evaluating the terrain's structure in relation to their driving capabilities. With the objective of filling this gap, this study focuses on terrain analysis methods that can be used for robot control, with particular reference to autonomous vehicles and mobile robots. In addition to an overview of the relevant theory, the investigation covers not only hardware, such as visual sensors and laser scanners, but also space descriptions, such as digital elevation models and point descriptors, introducing new aspects and characterizations of terrain assessment. Throughout the discussion, a wide range of examples and methodologies is presented for different tools and sensors, including a recent method of terrain assessment based on normal vector analysis. Indeed, normal vectors have demonstrated great potential for assessing terrain irregularity in both on-road and off-road environments.
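
The normal-vector idea behind this line of work can be reduced to a minimal sketch: treat a surface patch as traversable when its estimated normal deviates from the world vertical by less than a slope limit. The threshold and the simple up-vector test below are illustrative assumptions, not the exact method surveyed in the paper.

```python
import numpy as np

def traversable(normal, max_slope_deg=30.0):
    """Classify a surface patch as traversable when its normal
    deviates from the world up-vector by at most max_slope_deg."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    # Angle between the patch normal and the vertical (z-axis);
    # abs() makes the test insensitive to the normal's sign.
    angle = np.degrees(np.arccos(abs(n[2])))
    return angle <= max_slope_deg

print(traversable([0.0, 0.0, 1.0]))   # flat ground -> traversable
print(traversable([0.5, 0.0, 0.5]))   # ~45 deg slope -> not, for a 30 deg limit
```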

    LIDAR-based Driving Path Generation Using Fully Convolutional Neural Networks

    In this work, a novel learning-based approach has been developed to generate driving paths by integrating LIDAR point clouds, GPS-IMU information, and Google driving directions. The system is based on a fully convolutional neural network that jointly learns to carry out perception and path generation from real-world driving sequences and that is trained using automatically generated training examples. Several combinations of input data were tested in order to assess the performance gain provided by specific information modalities. The fully convolutional neural network trained using all the available sensors together with driving directions achieved the best MaxF score of 88.13% when considering a region of interest of 60x60 meters. By considering a smaller region of interest, the agreement between predicted paths and ground truth increased to 92.60%. The positive results obtained in this work indicate that the proposed system may help fill the gap between low-level scene parsing and behavior-reflex approaches by generating outputs that are close to vehicle control and at the same time human-interpretable. (Comment: title changed; formerly "Simultaneous Perception and Path Generation Using Fully Convolutional Neural Networks".)
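
The MaxF metric cited above is the maximum F1-measure obtained while sweeping a confidence threshold over the network's per-pixel output. A minimal sketch of that computation follows; the threshold grid is an assumption, and the actual KITTI evaluation also applies a bird's-eye-view transformation not reproduced here.

```python
import numpy as np

def max_f_score(confidence, ground_truth, thresholds=None):
    """Maximum F1-measure over a sweep of confidence thresholds
    (the idea behind the KITTI road benchmark's MaxF metric)."""
    conf = np.asarray(confidence, dtype=float).ravel()
    gt = np.asarray(ground_truth, dtype=bool).ravel()
    if thresholds is None:
        thresholds = np.linspace(0.0, 1.0, 101)   # assumed grid
    best = 0.0
    for t in thresholds:
        pred = conf >= t
        tp = np.sum(pred & gt)
        fp = np.sum(pred & ~gt)
        fn = np.sum(~pred & gt)
        if tp == 0:
            continue                 # undefined F1, skip this threshold
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        best = max(best, 2 * precision * recall / (precision + recall))
    return best
```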

    LIDAR-Camera Fusion for Road Detection Using Fully Convolutional Neural Networks

    In this work, a deep learning approach has been developed to carry out road detection by fusing LIDAR point clouds and camera images. An unstructured and sparse point cloud is first projected onto the camera image plane and then upsampled to obtain a set of dense 2D images encoding spatial information. Several fully convolutional neural networks (FCNs) are then trained to carry out road detection, either by using data from a single sensor, or by using three fusion strategies: early, late, and the newly proposed cross fusion. Whereas in the former two fusion approaches the integration of multimodal information is carried out at a predefined depth level, the cross fusion FCN is designed to learn directly from data where to integrate information; this is accomplished by using trainable cross connections between the LIDAR and the camera processing branches. To further highlight the benefits of using a multimodal system for road detection, a data set consisting of visually challenging scenes was extracted from driving sequences of the KITTI raw data set. It was then demonstrated that, as expected, a purely camera-based FCN severely underperforms on this data set, whereas a multimodal system is still able to provide high accuracy. Finally, the proposed cross fusion FCN was evaluated on the KITTI road benchmark, where it achieved excellent performance, with a MaxF score of 96.03%, ranking it among the top-performing approaches.
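
The first step described above, projecting the sparse point cloud onto the camera image plane, can be sketched with a standard pinhole model. The rigid transform and the intrinsic matrix below are placeholders; in practice KITTI supplies its own calibration matrices.

```python
import numpy as np

def project_to_image(points_lidar, T_cam_lidar, K):
    """Project 3D LIDAR points into pixel coordinates with a pinhole
    camera model.  T_cam_lidar is a 4x4 rigid transform from the LIDAR
    frame to the camera frame, K the 3x3 intrinsic matrix.  Points
    behind the camera are discarded."""
    pts = np.asarray(points_lidar, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coords
    cam = (T_cam_lidar @ homo.T).T[:, :3]             # points in camera frame
    in_front = cam[:, 2] > 0                          # keep z > 0 only
    cam = cam[in_front]
    pix = (K @ cam.T).T
    pix = pix[:, :2] / pix[:, 2:3]                    # perspective division
    return pix, cam[:, 2]                             # pixel coords and depths
```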

    A method for real-time dynamic fleet mission planning for autonomous mining

    This paper introduces a method for dynamic fleet mission planning for autonomous mining (in loop-free maps), in which a dynamic fleet mission is defined as a sequence of static fleet missions, each generated using a modified genetic algorithm. For the case of static fleet mission planning (where each vehicle completes just one mission), the proposed method is able to reliably generate, within a short optimization time, feasible fleet missions with short total duration and as few stops as possible. For the dynamic case, in simulations involving a realistic mine map, the proposed method generates efficient dynamic plans such that the number of completed missions per vehicle is only slightly reduced as the number of vehicles is increased, demonstrating the favorable scaling properties of the method as well as its applicability in real-world cases.

    Energy minimization for an electric bus using a genetic algorithm

    Background and methods: This paper addresses, in simulation, energy minimization of an autonomous electric minibus operating in an urban environment. Two different case studies have been considered, each involving a total of 10 different 2 km bus routes and two different average speeds. In the proposed method, the minibus follows an optimized speed profile, generated using a genetic algorithm. Results: In the first case study, the vehicle was able to reduce its energy consumption by around 7 to 12% relative to a baseline case in which it maintains a constant speed between stops, with short acceleration and deceleration phases. In the second case study, involving mass variation (passengers boarding and alighting), it was demonstrated that the number of round trips that can be completed on a single battery charge is increased by around 10% using the proposed method.
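
The optimization loop described above can be illustrated with a toy genetic algorithm that evolves a per-segment speed profile under a trip-time budget. The vehicle mass, resistance coefficients, route discretization, and time budget below are invented for illustration and are not the paper's model or values.

```python
import random

random.seed(0)
SEG_LEN, N_SEG, T_MAX = 100.0, 20, 240.0   # 2 km route, trip-time budget [s]
MASS = 3000.0                               # hypothetical minibus mass [kg]

def cost(profile):
    """Toy cost: resistive losses per segment plus kinetic energy spent
    on every speed increase (no regeneration), with a large penalty
    when the trip exceeds the time budget."""
    energy, time, v_prev = 0.0, 0.0, 0.0
    for v in profile:
        energy += (200.0 + 0.8 * v * v) * SEG_LEN          # resistive losses [J]
        if v > v_prev:                                      # acceleration cost
            energy += 0.5 * MASS * (v * v - v_prev * v_prev)
        time += SEG_LEN / v
        v_prev = v
    return energy + 1e5 * max(0.0, time - T_MAX)

def mutate(profile, rate=0.2):
    # Perturb some segment speeds, clamped to a plausible urban range.
    return [min(15.0, max(2.0, v + random.gauss(0, 1)))
            if random.random() < rate else v for v in profile]

def crossover(a, b):
    cut = random.randrange(1, N_SEG)
    return a[:cut] + b[cut:]

# Plain generational GA with elitism.
pop = [[random.uniform(2.0, 15.0) for _ in range(N_SEG)] for _ in range(60)]
initial_best = min(cost(p) for p in pop)
for _ in range(150):
    pop.sort(key=cost)
    elite = pop[:10]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(50)]
best = min(pop, key=cost)
```

Because the elite individuals survive unchanged each generation, the best cost can only improve over the run.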

    Clustering and PCA for Reconstructing Two Perpendicular Planes Using Ultrasonic Sensors

    In this paper, the authors use sonar transducers to detect the corner formed by two orthogonal panels and propose a strategy for accurately reconstructing the surfaces. In order to point a linear array of four sensors at the desired position, the motion of a digital motor is appropriately controlled. When the sensors are directed towards the intersection between the planes, longer times of flight are observed because of multiple reflections. All the affected distances have to be excluded, and for this purpose an indicator based on the energy of the output signal is introduced. A clustering technique partitions the dataset into three clusters, and the indicator selects the subset containing misrepresented information. The remaining distances are corrected to take the directivity into account, yielding two sets of points in three-dimensional space. In order to reject outliers, each set is filtered by means of a confidence ellipsoid defined through Principal Component Analysis (PCA). The best-fit planes are then obtained from the principal directions and the variances. Experimental tests and results demonstrate the effectiveness of this new approach.
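
The best-fit-plane step described above (principal directions plus variances) is standard total least squares: the plane passes through the centroid, and its normal is the direction of least variance. A minimal sketch:

```python
import numpy as np

def fit_plane_pca(points):
    """Best-fit plane through a set of 3D points: the plane passes
    through the centroid, and its normal is the principal direction
    with the smallest variance (last right singular vector)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # SVD of the centred data: rows of vt are the principal directions,
    # ordered by decreasing singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)
```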

    Unevenness Point Descriptor for Terrain Analysis in Mobile Robot Applications

    In recent years, the use of imaging sensors that produce a three-dimensional representation of the environment has become an efficient solution to increase the degree of perception of autonomous mobile robots. Accurate and dense 3D point clouds can be generated from traditional stereo systems and laser scanners or from the new generation of RGB-D cameras, representing a versatile, reliable and cost-effective solution that is rapidly gaining interest within the robotics community. For autonomous mobile robots, it is critical to assess the traversability of the surrounding environment, especially when driving across natural terrain. In this paper, a novel approach to detect traversable and non-traversable regions of the environment from a depth image is presented that could enhance mobility and safety through integration with localization, control and planning methods. The proposed algorithm is based on the analysis of the normal vector of a surface obtained through Principal Component Analysis, and it leads to the definition of a novel descriptor, termed the Unevenness Point Descriptor. Experimental results, obtained with vehicles operating in indoor and outdoor environments, are presented to validate this approach.
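
The core ingredient of the descriptor, a surface normal obtained through Principal Component Analysis of a local neighborhood, can be sketched as follows; the roughness proxy returned alongside it is a simplification for illustration, not the Unevenness Point Descriptor definition itself.

```python
import numpy as np

def local_normal_and_roughness(neighborhood):
    """PCA over a local neighborhood of 3D points: the eigenvector of
    the covariance with the smallest eigenvalue approximates the
    surface normal, and that eigenvalue measures how far the patch is
    from a perfect plane (a crude roughness proxy)."""
    pts = np.asarray(neighborhood, dtype=float)
    centred = pts - pts.mean(axis=0)
    cov = centred.T @ centred / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]
    if normal[2] < 0:                        # orient normals upwards
        normal = -normal
    return normal, eigvals[0]
```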

    Three Different Approaches for Localization in a Corridor Environment by Means of an Ultrasonic Wide Beam

    In this paper, the authors present three methods to detect the position and orientation of an observer, such as a mobile robot, with respect to a corridor wall. They use an inexpensive sensor to emit a wide ultrasonic beam, rotated by means of an accurate servomotor in order to propagate ultrasonic waves towards a regular wall. Whatever the wall material, the scanned surface acts as an acoustic reflector as a consequence of the low impedance of air. The realized device provides distance information at each motor position and thus yields a set of points, like a ray-trace scanner. The dataset contains points lying on a circular arc and corresponding to strong returns. Three different approaches are considered to estimate both the slope of the wall and its minimum distance from the sensor. Slope and perpendicular distance are the parameters of a target plane, which may be calculated at each observer position to predict the next location. Experimental tests and simulations, obtained by scanning from different stationary locations, are shown and discussed, demonstrating the effectiveness of the proposed approaches.
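
The two parameters estimated by all three approaches, the slope of the wall and its perpendicular distance from the sensor, can also be recovered by a plain total-least-squares line fit to the scan points, sketched below as a baseline. The sensor is assumed at the origin; this is an illustrative sketch, not one of the paper's three methods.

```python
import numpy as np

def wall_from_scan(points_2d):
    """Fit a straight wall to 2D scan points by total least squares
    (2D PCA): the line direction is the dominant principal direction,
    and the perpendicular distance is the projection of the centroid
    onto the line normal.  Returns (slope_deg, distance)."""
    pts = np.asarray(points_2d, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]                         # unit vector along the wall
    normal = np.array([-direction[1], direction[0]])
    distance = abs(centroid @ normal)         # sensor assumed at the origin
    slope_deg = np.degrees(np.arctan2(direction[1], direction[0])) % 180.0
    return slope_deg, distance
```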

    Lidar–camera semi-supervised learning for semantic segmentation

    In this work, we investigated two issues: (1) how the fusion of lidar and camera data can improve semantic segmentation performance compared with the individual sensor modalities in a supervised learning context; and (2) how fusion can also be leveraged for semi-supervised learning in order to further improve performance and to adapt to new domains without requiring any additional labelled data. A comparative study was carried out through an experimental evaluation of networks trained in different setups, using scenarios ranging from sunny days to rainy night scenes. The networks were tested in challenging, and less common, scenarios where cameras or lidars alone would not provide a reliable prediction. Our results suggest that semi-supervised learning and fusion techniques increase the overall performance of the network in challenging scenarios while using fewer data annotations.

    A cross-country comparison of user experience of public autonomous transport

    Autonomous solutions for transportation are emerging worldwide, and one of the sectors that will benefit most from them is public transport, by shifting toward the new paradigm of Mobility as a Service (MaaS). Densely populated areas cannot afford an increase in individual transportation due to space limitations, congestion, and pollution. Working towards more effective and inclusive mobility in public areas, this paper compares user experiences of autonomous public transport across Baltic countries, with the final goal of gaining increased insight into public needs. User experience was evaluated through questionnaires gathered during pilot projects that implemented a public transportation line using an automated electric minibus between 2018 and 2019. To ensure sufficient diversity in the data, the pilot projects were implemented in several cities in the Baltic Sea Area. The data analysed in this paper refer specifically to the cities of Helsinki (Finland), Tallinn (Estonia), Kongsberg (Norway), and GdaƄsk (Poland). Across all cities, passengers provided remarkably positive feedback regarding personal security and safety on board. The overall feedback, which was very positive in general, showed statistically significant differences across the groups of cities (Kongsberg, Helsinki, Tallinn, and GdaƄsk), partially explicable by differences in route design. In addition, across all cities and feedback topics, male passengers gave lower scores than female passengers. The overall rating suggests that there is demand for future last-mile automated services that could be integrated with the MaaS concept, although demand varies according to socio-economic and location-based conditions across different countries.
    • 

    corecore